Feasible-side Global Convergence in Experimental Optimization
Author
Abstract
where u ∈ R^{n_u} are the independent decision variables subject to the lower and upper bounds u^L and u^U – the curly inequalities (⪯) denoting componentwise inequality – and φ, g : R^{n_u} → R are the cost and constraint functions, respectively. The characteristic element of (1.1) is the presence of experimental functions, denoted by the subscript p (for "plant"), which may only be evaluated by conducting an experiment for a given choice of u and whose values cannot be known otherwise.

In this work, the term "experiment" will be employed to denote a repeatable but expensive task, where "repeatable" means that carrying out the task once with the variables u_a and again with u_b will yield identical results if u_a = u_b, while "expensive" implies that carrying out the task is either financially costly (e.g., machining a very expensive space shuttle component), requires a lot of time (e.g., simulating one day of traffic behavior for a large metropolis), or may only be done very infrequently (e.g., producing a large batch of a pharmaceutical compound once every three months). Of course, such expenses are not mutually exclusive and may also occur together. By contrast, the constraints without the p subscript denote numerical functions that can be easily evaluated for any given u without requiring any experiments. Consequently, we will refer to (1.1) as an experimental optimization problem.

The first formal studies on methodically solving such problems may be traced back to the 1940s, 50s, and 60s, with the works of Hotelling [46], Box [10, 9], Brooks [12, 13], and Spendley et al. [73] essentially representing the foundations of this field. The methods that came out of these works – namely, those of (experimental) steepest ascent, evolutionary operation, response-surface modeling, and the simplex algorithm – have remained popular to the present day and are still employed in a number of diverse applications [40, 44, 41, 42, 3, 4, 64]. Additionally, there are entire fields of research dedicated to solving problems that may be cast in the form of (1.1). We cite, as some examples that we have encountered:
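The problem statement (1.1) referenced above is not reproduced in this excerpt. As a reading aid, the following LaTeX sketch gives the general form suggested by the description (an experimental cost φ_p, experimental constraints g_p, purely numerical constraints g, and bounds on u); the numbers of experimental and numerical constraints, n_{g_p} and n_g, are notational assumptions made here for illustration:

\begin{equation}
\begin{aligned}
\min_{u} \quad & \phi_p(u) \\
\text{s.t.} \quad & g_{p,j}(u) \leq 0, \quad j = 1, \dots, n_{g_p}, \\
& g_j(u) \leq 0, \quad j = 1, \dots, n_g, \\
& u^L \preceq u \preceq u^U
\end{aligned}
\tag{1.1}
\end{equation}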
Similar Resources
Implementation techniques for the SCFO experimental optimization framework
The material presented in this document is intended as a comprehensive, implementation-oriented supplement to the experimental optimization framework presented in [10]. The issues of physical degradation, unknown Lipschitz constants, measurement/estimation noise, gradient estimation, sufficient excitation, and the handling of soft constraints and/or a numerical cost function are all addressed, ...
Global convergence of Newton's method on an interval
The solution of an equation f(x) = c given by an increasing function f on an interval I and right-hand side c can be approximated by a sequence calculated according to Newton's method. In this article, global convergence of the method is considered in the strong sense of convergence for any initial value in I and any feasible right-hand side. The class of functions for which the method conv...
Global convergence of an inexact interior-point method for convex quadratic symmetric cone programming
In this paper, we propose a feasible interior-point method for convex quadratic programming over symmetric cones. The proposed algorithm relaxes the accuracy requirements in the solution of the Newton equation system, by using an inexact Newton direction. Furthermore, we obtain an acceptable level of error in the inexact algorithm on convex quadratic symmetric cone programmin...
A New Hybrid Flower Pollination Algorithm for Solving Constrained Global Optimization Problems
Global optimization methods play an important role in solving many real-world problems. Flower pollination algorithm (FP) is a new nature-inspired algorithm, based on the characteristics of flowering plants. In this paper, a new hybrid optimization method called hybrid flower pollination algorithm (FPPSO) is proposed. The method combines the standard flower pollination algorithm (FP) with the par...
Global Convergence of a Class of Trust Region Algorithms for Optimization Using Inexact Projections on Convex Constraints
A class of trust region based algorithms is presented for the solution of nonlinear optimization problems with a convex feasible set. At variance with previously published analysis of this type, the theory presented allows for the use of general norms. Furthermore, the proposed algorithms do not require the explicit computation of the projected gradient, and can therefore be adapted to cases wh...
On the Convergence of Adaptive Stochastic Search Methods for Constrained and Multi-objective Black-Box Optimization
Stochastic search methods for global optimization and multi-objective optimization are widely used in practice, especially on problems with black-box objective and constraint functions. Although there are many theoretical results on the convergence of stochastic search methods, relatively few deal with black-box constraints and multiple black-box objectives and previous convergence analyses req...